“No one should believe that the answers of Artificial Intelligence come from God; they have a human creator. It is crucial to know who that creator is”

The race for Artificial Intelligence (AI), which pits the United States against China while Europe seems to watch from afar, marks the beginning of a new era and receives an original treatment from Bruno Maçães, former Secretary of State for European Affairs. Now a senior consultant at Flint Global in London, where he advises companies on areas such as geopolitics and technology, he weaves the two themes together in his new book about the superpowers that want to control the world through AI systems that, he assures us, will very soon be omnipresent in our lives. In this interview with VISÃO, we set the great conflicts of today against the near future, in light of the ideas developed by this Harvard PhD in Political Science in Construtores de Mundos – A Tecnologia e a Nova Geopolítica (World Builders – Technology and the New Geopolitics) (Temas e Debates, 352 pages, €19.90), which will be launched in Portugal this Thursday, the 19th, at the Lisbon Book Fair.
Riding the wave of technological advances, where are the new worlds of geopolitics taking us?
To very unexpected, very strange and, potentially, very dangerous places. My book has a bit of the Matrix behind it, as it addresses the creation of artificial or virtual worlds. The difference is that it is our species, and especially the great superpowers, that build these virtual worlds, because they increasingly realize that true power lies in building a world in which others are forced to live. The power is even greater if others do not fully understand that these worlds are artificial and that they are controlled by the great superpowers.
Superpowers have always tried to control the world, but technological innovations now offer more advanced tools. Is this the danger?
In the book, the concepts of geopolitics and technology are practically synonymous, in the sense that the creation of virtual worlds, enabled by technology, is a radical and profound form of geopolitical power. Even military technology is a virtual world, of computers, satellites, communications, and Artificial Intelligence, as we have seen in both Ukraine and Israel. I contrast the old geopolitics, linked to the control of physical territory, with a new geopolitics, which has to do with the creation of virtual territory.
When Donald Trump talks about annexing Greenland and Canada, he is coveting minerals for making microchips, which are essential to winning the AI race. Is this the new big weapon in the US and China’s dispute?
That is one explanation, yes. Another is that Donald Trump has a kind of fantasy that almost never ends up being put into practice. The last few months have reinforced my thesis that global power will be determined by the capacity for invention, and not so much by the somewhat outdated idea of controlling certain geographies or materials. We don't even know exactly which materials will be the most critical, because innovation is advancing at a dizzying pace. One relevant example is the trade and technology war with China, namely China's access to American semiconductors and the US's access to Chinese rare earths and their sophisticated refining and production system.
How important are chips?
In the book, I symbolically compare them to the old military bases or strategic points of the old Portuguese Empire, such as Malacca, Ormuz, and Goa. This is because they are a kind of transition point, in this case between the physical world and the virtual world. Whoever controls the chips controls access to these virtual worlds. They are like tiny grains of sand capable of thinking at the speed of light; that is the miracle of modern chips.
Is China's DeepSeek generative AI system, unveiled earlier this year, a sign of the fierce competition the US will face?
What struck me about DeepSeek, first of all, was that no one on the research team had studied outside of China. This means that these capabilities have already become very indigenous. We need to be prepared for research assistant-style chatbots to spread, perhaps as early as this year, to financial services, education, video production, advertising, but also to industry. In other words, the robots will be equipped with these AI models that we are already using and that will be present everywhere. Soon, they will start to shape the minds of our students and our children and even our public policy, because ministries will start using them. That is why it is very interesting to know who designed the models and based on what values.
Society doesn't seem to weigh the consequences, but behind AI there will always be a programmer and, at the top, someone in control.
The other day, I asked DeepSeek to rank countries according to their level of evil. Interestingly, it told me that the most evil country in the world was Russia, which may be a little surprising, but it really isn’t, because DeepSeek is using data from the internet, whose information comes mostly from the West. But it would be very easy for the creators to manipulate the data, which means that no one should believe that the answers come from God; that would be incredibly naive. These are answers that come from a human creator, with his inclinations, his preferences, his values. But they will have an infinitely greater capacity to shape our minds than social networks. That’s why it’s a crucial issue for Europe and for all of us to know who designs the models, based on what priorities and for what purposes, so that we don’t live in a kind of Matrix built by China or the US.
Regulation may be a means of mitigating these perverse effects, but is it possible for Europe to fully protect itself from the potential risks of systems produced in China or the US?
That was the terrible mistake made 25 years ago, when Europe accepted that the big internet companies could be American. What quickly became clear is that it is not possible to regulate American companies. On the contrary, they have a lot of influence over European regulators. It would be possible to regulate European companies, and I hope that lesson has been learned. There are signs in that direction, but they are still insufficient. Unfortunately, I see two types of hesitation. On the one hand, Europe continues to think that American and European companies are more or less the same, which I think is a terrible mistake. On the other hand, Europe continues to be very consumer-focused and does not value big companies, which inevitably hold a certain monopoly power. Choices need to be made, and we are already late. It has been quite shocking to see that China and the US are fighting over AI models and that the Europeans are falling behind once again. The stakes are now much higher than with the internet. This is an internet on steroids, with capabilities multiplied by ten or a hundred.
Could it become a greater threat than nuclear weapons, in the sense that one State could use it against another?
The military impact of AI will undoubtedly be devastating. Wars of the future will probably be decided in a matter of hours, if not minutes, given the absolutely divine and overwhelming speed of target detection and destruction machines. We have seen some of this in Gaza and Ukraine. AI makes it possible to completely eliminate the ability to conceal targets, for example. A tank has no ability to hide, because AI can detect and destroy it in a few minutes. This is the future we are entering, and whoever is left behind will have no ability to resist. If a war breaks out in Taiwan, it will be decided by AI between the US and China. Whoever has the most advanced AI will win.
Taiwan is a point of contention, largely because of its dominance in AI as a major supplier of chips to the US. Do you think that, in this case, the US would let events run their course, as it is doing in Ukraine?
Clearly not. When I was in Taiwan a year ago, though, I heard a lot of concern: the Taiwanese are looking at Ukraine as a kind of test of American commitment. But it’s different. On the one hand, the US sees China as a direct rival and is much less comfortable acknowledging a Chinese advance than a Russian one. On the other hand, there will be much more fear of confronting China.
You say that Donald Trump does not follow through on many of the promises or threats he makes. Has China already drawn its own conclusions about this pattern of behavior?
I think so. Now, while it is hard to be sure, I am convinced that if China sees little commitment from the US, it will not necessarily conclude that an invasion is a good solution. It will prefer to use this distancing to pressure the Taiwanese into accepting a rapprochement that, in the long run, will lead to reunification. That is more the Chinese method.
A method that, at one time, the Americans also used, as the book points out.
Yes, the Chinese method is much more about creating a system that slowly absorbs the rest of the world. They learned from the Americans. That's what the US did between 1890 and 1945. Over the course of 50 years, they created a world system that, in many cases, absorbs other countries, either willingly or unwillingly. From my experience also living in China, and writing books about China, I would say that's Beijing's preference. But that's not to say that we shouldn't prepare for a war in Taiwan, especially the US. I just see China as more interested in using its economic and technological power, in slowly building a world technological system that other countries will use and on which they will become dependent.
Creating the rules of the game.
Exactly, by creating the rules that others have to follow. From that point of view, there is a huge difference between China and Russia.
Vladimir Putin himself comes from a different school.
Clearly. Putin is more of a representative of what I call the old geopolitics. In his mind, that is the world that makes sense and he is trying to get back to it, with little success. What may be happening is that Russia ends up playing a role in reducing the power of the American empire. In the early 20th century, Germany wanted to become a world leader. It didn’t succeed, but it ended up helping the US achieve that goal, because it had a big impact on diminishing the capabilities of the British Empire. I sometimes think that the same thing could happen in the 21st century, with Russia posing challenges to American power and paving the way for a new Chinese world. That is a possibility. I hope readers will enjoy my book because it discusses several possibilities.
In March, VISÃO published an article from Time in which it was suggested that the development of AI would lead to a balance of forces such that the great powers would not use it for more devastating purposes, for fear of counterattacks of equal proportions, similar to what happened with nuclear weapons during the Cold War. Is it realistic to think of the same type of logic for AI?
Nuclear weapons, in this new era, are beginning to play an offensive role. In the case of Russia, this is very clear, and I think Israel would also be less adventurous if it did not have this resource. In terms of AI, it is even more difficult to follow the nuclear logic of the Cold War, because everything will be much faster and both sides will have little ability to know what is on the other side. There will be a huge incentive to be the first to strike, but the book has an optimistic message, because world-building can be a more effective form of power. And those who want maximum power can achieve it through construction, not destruction. This was the lesson of the American empire in the last century, which still persists and is very much absorbed by China.
If this is an optimistic view, will the day come when superpowers will advocate for AI non-proliferation, as they did for nuclear weapons?
I think it’s impossible. With DeepSeek, we’ve already seen that China has decided to release an open-source model, so these capabilities are now available worldwide. It’s going to be much harder to contain. But I think we shouldn’t be too pessimistic, because the promise of AI is also extraordinary. Maybe the rest of the world will finally find a recipe for development. Maybe we’ll have the solution to many of our social problems here. We’ll see. The consequences for employment could be dramatic, but there will also probably be a big impact on economic growth. By 2026, AI will be everywhere.
The ability to influence electoral campaigns, and not only, will be huge.
Absolutely. The internet has already revolutionized advertising, but AI will be able to direct it in an absolutely irresistible way. Even more dangerous will be the possibility of manipulating voters in ways they don’t even realize. We also have many children who use chatbots as their best friend to ask for advice, and that is something we are not prepared for as a society. Whoever controls these systems will have the ability to influence minds in a profound and invisible way that humanity has never known.
Any other warnings?
One piece of advice that would be useful for the new generations, perhaps as a way of resisting this a little, is to adopt stricter privacy standards. We have to maintain a certain secrecy. If information about our psychology is available online, we are extremely vulnerable to manipulation of various kinds. So, for those just beginning to build their digital footprint, it would not be a bad idea to create several online personas, keeping the deepest layer of one’s personality in reserve and out of reach of all these instruments. Fernando Pessoa was right when, speaking of his heteronyms, he talked about simulation; Pessoa was already living in the metaverse long before we invented it.